
    On Iterated Dominance, Matrix Elimination, and Matched Paths

    We study computational problems arising from the iterated removal of weakly dominated actions in anonymous games. Our main result shows that it is NP-complete to decide whether an anonymous game with three actions can be solved via iterated weak dominance. The two-action case can be reformulated as a natural elimination problem on a matrix, whose complexity turns out to be surprisingly difficult to characterize and ultimately remains open. We do, however, establish connections to a matching problem along paths in a directed graph, which is computationally hard in general but can also be used to identify tractable cases of matrix elimination. Finally, we identify classes of anonymous games for which iterated dominance is in P and NP-complete, respectively.
    Comment: 12 pages, 3 figures, 27th International Symposium on Theoretical Aspects of Computer Science (STACS)
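    To make the elimination procedure concrete, here is a minimal Python sketch of iterated weak dominance in the simpler two-player bimatrix setting (not the anonymous-game encoding studied in the paper; all function names are ours):

```python
def weakly_dominated_row(U, rows, cols):
    """Return a live row r that some other live row r2 weakly dominates:
    r2 does at least as well against every live column and strictly
    better against at least one.  Returns None if no such row exists."""
    for r in rows:
        for r2 in rows:
            if r2 == r:
                continue
            diffs = [U[r2][c] - U[r][c] for c in cols]
            if min(diffs) >= 0 and max(diffs) > 0:
                return r
    return None


def iterated_weak_dominance(U, V):
    """Iterated removal of weakly dominated actions in a bimatrix game.

    U[r][c] is the row player's payoff, V[r][c] the column player's.
    Returns the surviving (rows, cols).  For *weak* dominance the outcome
    can depend on the order of removal (part of what makes these problems
    hard); this sketch simply removes the first dominated action found."""
    rows = list(range(len(U)))
    cols = list(range(len(U[0])))
    while True:
        r = weakly_dominated_row(U, rows, cols)
        if r is not None:
            rows.remove(r)
            continue
        # Treat the column player as a row player on the transposed matrix.
        W = [list(col) for col in zip(*V)]   # W[c][r] = V[r][c]
        c = weakly_dominated_row(W, cols, rows)
        if c is not None:
            cols.remove(c)
            continue
        return rows, cols


print(iterated_weak_dominance([[1, 1], [0, 1]], [[1, 0], [1, 1]]))
# -> ([0], [0]): the game is solved by iterated weak dominance
```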

    Sublinear growth of the corrector in stochastic homogenization: Optimal stochastic estimates for slowly decaying correlations

    We establish sublinear growth of correctors in the context of stochastic homogenization of linear elliptic PDEs. In the case of weak decorrelation and "essentially Gaussian" coefficient fields, we obtain optimal (stretched exponential) stochastic moments for the minimal radius above which the corrector is sublinear. Our estimates also correctly capture the quantitative sublinearity of the corrector (caused by the quantitative decorrelation on larger scales). The result is based on estimates of the Malliavin derivative for certain functionals that are essentially averages of the gradient of the corrector, on concentration of measure, and on a mean value property for a-harmonic functions.
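    For context, the following LaTeX note records the standard corrector equation and the notion of sublinear growth being quantified; the notation is the generic one for stochastic homogenization, not copied from the paper:

```latex
% Generic setup (our notation, not verbatim from the paper): for a
% stationary, uniformly elliptic coefficient field a and a fixed
% direction e, the corrector \phi_e solves the whole-space equation
\[
  -\nabla \cdot \bigl( a \, (\nabla \phi_e + e) \bigr) = 0
  \qquad \text{in } \mathbb{R}^d .
\]
% "Sublinear growth" of the corrector means that its L^2 oscillation on
% the ball B_R, rescaled by the radius, vanishes as R grows:
\[
  \frac{1}{R} \biggl( \frac{1}{|B_R|} \int_{B_R}
    \Bigl| \phi_e - \frac{1}{|B_R|} \int_{B_R} \phi_e \Bigr|^2
  \biggr)^{1/2} \;\longrightarrow\; 0
  \qquad (R \to \infty).
\]
% The paper quantifies the smallest (random) radius r_* above which this
% quantity is small and proves stretched exponential moments for r_*.
```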

    Simplicity-Expressiveness Tradeoffs in Mechanism Design

    A fundamental result in mechanism design theory, the so-called revelation principle, asserts that for many questions concerning the existence of mechanisms with a given outcome one can restrict attention to truthful direct-revelation mechanisms. In practice, however, many mechanisms use a restricted message space. This motivates the study of the tradeoffs involved in choosing simplified mechanisms, which can sometimes bring benefits by precluding bad or promoting good equilibria, and other times impose costs on welfare and revenue. We study the simplicity-expressiveness tradeoff in two representative settings, sponsored search auctions and combinatorial auctions, canonical examples of complete-information and incomplete-information analysis, respectively. We observe that the amount of information available to the agents plays an important role in the tradeoff between simplicity and expressiveness.
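    As a concrete illustration of the contrast in sponsored search, here is a hedged Python sketch of the two standard position-auction payment rules: the generalized second-price auction (GSP), the simple format used in practice, and VCG, the expressive truthful benchmark. The formulas are the textbook ones for position auctions, not taken from the paper, and the function names are ours:

```python
def gsp_payments(bids, ctrs):
    """GSP: the bidder winning slot i pays the next-highest bid per click.
    ctrs are slot click-through rates in decreasing order; payments are
    expected totals (per-click price times click rate)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    pays = {}
    for slot in range(min(len(ctrs), len(order))):
        nxt = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        pays[order[slot]] = nxt * ctrs[slot]
    return pays


def vcg_payments(bids, ctrs):
    """VCG: each winner pays the externality it imposes on the bidders
    below it, i.e. the sum over lower slots of (click-rate drop) times
    (displaced bidder's bid)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    k = min(len(ctrs), len(order))
    alpha = list(ctrs[:k]) + [0.0]          # click rate is 0 below last slot
    pays = {}
    for slot in range(k):
        p = 0.0
        for j in range(slot, k):
            displaced = bids[order[j + 1]] if j + 1 < len(order) else 0.0
            p += (alpha[j] - alpha[j + 1]) * displaced
        pays[order[slot]] = p
    return pays


bids, ctrs = [10.0, 6.0, 4.0], [1.0, 0.5]
print(gsp_payments(bids, ctrs))   # {0: 6.0, 1: 2.0}
print(vcg_payments(bids, ctrs))   # {0: 5.0, 1: 2.0}
```

    Under truthful bids the two rules allocate identically but charge differently; GSP's simpler pricing is what gives rise to the non-truthful equilibria the paper analyzes.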

    Using Bayes theorem to estimate positive and negative predictive values for continuously and ordinally scaled diagnostic tests

    Objectives: Positive predictive values (PPVs) and negative predictive values (NPVs) are frequently reported to put estimates of the accuracy of a diagnostic test into clinical context and to obtain risk estimates for a given patient that take the baseline prevalence in the population into account. To calculate PPV and NPV, tests with ordinally or continuously scaled results are commonly dichotomized, at the expense of a loss of information. Methods: Extending the rationale for the calculation of PPV and NPV, Bayes' theorem is used to calculate the probability of disease given the outcome of a continuously or ordinally scaled test. Probabilities of test results conditional on disease status are modeled in a Bayesian framework and subsequently transformed into probabilities of disease status conditional on the test result. Results: Using publicly available data, the probability of a clinical depression diagnosis given PROMIS Depression scores was estimated. Comparison with PPV and NPV based on dichotomized scores shows that a more fine-grained interpretation of test scores is possible. Conclusions: The proposed method can facilitate accurate and meaningful interpretation of test results in clinical settings by avoiding unnecessary dichotomization of test scores.
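    A minimal Python sketch of the underlying computation, assuming (purely for illustration) normal score distributions in the diseased and non-diseased groups; the paper's actual conditional models may differ, and all numbers below are placeholders rather than its reported estimates:

```python
from math import exp, pi, sqrt


def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))


def prob_disease_given_score(t, prevalence, mu_d, sd_d, mu_h, sd_h):
    """Bayes' theorem with a continuous test score t:

        P(D | T=t) = f(t|D) p / ( f(t|D) p + f(t|not D) (1 - p) )

    where f(.|D) and f(.|not D) are the score densities in the diseased
    and healthy groups and p is the baseline prevalence.  This yields a
    score-specific predictive value instead of a single dichotomized
    PPV/NPV pair."""
    like_d = normal_pdf(t, mu_d, sd_d)
    like_h = normal_pdf(t, mu_h, sd_h)
    num = like_d * prevalence
    return num / (num + like_h * (1 - prevalence))


# Illustrative numbers only: a T-score metric with the healthy group
# centered at 50 and the depressed group at 62.
print(prob_disease_given_score(t=65, prevalence=0.15,
                               mu_d=62, sd_d=8, mu_h=50, sd_h=8))
```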

    Sum of Us: Strategyproof Selection from the Selectors

    We consider directed graphs over a set of n agents, where an edge (i,j) is taken to mean that agent i supports or trusts agent j. Given such a graph and an integer k ≤ n, we wish to select a subset of k agents that maximizes the sum of indegrees, i.e., a subset of the k most popular or most trusted agents. At the same time we assume that each individual agent is only interested in being selected, and may misreport its outgoing edges to this end. This problem formulation captures realistic scenarios where agents choose among themselves, which can be found in the context of Internet search, social networks like Twitter, or reputation systems like Epinions. Our goal is to design mechanisms without payments that map each graph to a k-subset of agents to be selected and satisfy the following two constraints: strategyproofness, i.e., agents cannot benefit from misreporting their outgoing edges, and approximate optimality, i.e., the sum of indegrees of the selected subset of agents is always close to optimal. Our first main result is a surprising impossibility: for k ∈ {1, ..., n−1}, no deterministic strategyproof mechanism can provide a finite approximation ratio. Our second main result is a randomized strategyproof mechanism with an approximation ratio that is bounded from above by four for any value of k and approaches one as k grows.
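    The core trick that makes randomized selection strategyproof is to ensure that an agent's reported edges can never influence its own selection. The Python sketch below illustrates this random-partition idea; it is an illustration of the principle only, not the paper's exact mechanism or its approximation guarantee:

```python
import random


def partition_mechanism(edges, n, k, rng=random):
    """Illustration of the random-partition idea behind strategyproof
    selection (NOT the paper's exact mechanism).

    Each agent is assigned to one of two groups uniformly at random, and
    roughly k/2 agents are selected from each group by counting only
    incoming edges that originate in the *other* group.  An agent's
    outgoing edges can then only affect selections in the other group,
    so misreporting them cannot change the agent's own selection.
    (If a group happens to contain fewer agents than its quota, fewer
    than k agents are returned; a full mechanism would handle this.)"""
    groups = [set(), set()]
    for agent in range(n):
        groups[rng.randrange(2)].add(agent)
    selected = []
    quotas = (k // 2, k - k // 2)
    for g, quota in enumerate(quotas):
        other = groups[1 - g]
        # Indegree counted over cross-partition edges only.
        score = {a: sum(1 for (i, j) in edges if j == a and i in other)
                 for a in groups[g]}
        selected.extend(sorted(groups[g], key=lambda a: -score[a])[:quota])
    return selected


edges = [(0, 1), (2, 1), (3, 1), (0, 2), (1, 2)]
print(partition_mechanism(edges, n=4, k=2))
```

    The price of ignoring within-group edges is exactly what the approximation analysis has to control.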

    From Instrument to Construct - Standardized Measurement of Health-Related Quality of Life

    The collection of patient-reported outcomes is relevant in clinical research and clinical care alike. The instruments currently used to collect these outcomes are, however, poorly standardized, which limits the comparability of the resulting data. Methods from item response theory make it possible to standardize measurement both across instruments and across languages. This habilitation thesis first describes the development and validation of a construct-based, instrument-independent scale for measuring depression severity, built on a probabilistic test model. It was shown that the latent variable depression can be measured with different instruments, that model parameters can be applied in independent samples, and that Bayesian methods can be used to update model parameters with newly collected data. Second, the comparability of PROMIS instruments across languages is examined. Both for the PROMIS item bank measuring depression and for the PROMIS Profile 29, which combines the central health domains, differences in item parameters across languages turn out to be negligibly small, so comparing test scores across the tested instruments' language versions is valid. Further findings confirm the validity of construct-based scales, and the results on cross-language comparability have also been reported for other domains and languages. One avenue for further development of the models used is Bayesian IRT models that explicitly model clinically irrelevant differences in item parameters across samples.
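    A minimal Python sketch of the kind of IRT machinery described here: a 2PL item response function and a grid-based Bayesian posterior for the latent trait, which is what allows responses from different instruments, calibrated to a common scale, to be combined. All item parameters below are made up for illustration:

```python
from math import exp


def p_endorse(theta, a, b):
    """2PL item response function: probability of endorsing an item,
    given latent trait theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))


def posterior_theta(responses, items, grid, prior):
    """Grid-based Bayesian estimate of theta given 0/1 responses.

    items is a list of (a, b) parameter pairs; grid and prior define a
    discretized prior over theta.  Because the item parameters sit on a
    common scale, responses from *different* instruments can be pooled
    into one posterior for the same latent variable."""
    post = []
    for theta, pr in zip(grid, prior):
        like = 1.0
        for resp, (a, b) in zip(responses, items):
            p = p_endorse(theta, a, b)
            like *= p if resp else (1.0 - p)
        post.append(pr * like)
    z = sum(post)
    return [w / z for w in post]


grid = [x / 10 for x in range(-40, 41)]          # theta in [-4, 4]
prior = [1 / len(grid)] * len(grid)              # flat prior
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]    # made-up (a, b) pairs
post = posterior_theta([1, 1, 0], items, grid, prior)
print(sum(t * w for t, w in zip(grid, post)))    # EAP estimate of theta
```

    Updating such a model with newly collected data, as described above, amounts to using the current posterior over item parameters as the prior for the next calibration.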